Child abuse material 'systemic' on Elon Musk's X amid Grok scandal, Australian online safety regulator warned

The Guardian

Australia's eSafety commissioner wrote to X in January after its AI chatbot Grok was used to generate sexualised images of women and children online. The Australian online safety regulator warned Elon Musk's X, amid the Grok sexualised image generation scandal, that it had found child abuse material was "particularly systemic" on X and more accessible than on "any other mainstream service", correspondence obtained by Guardian Australia reveals. The eSafety commissioner wrote to X in January after its chatbot, Grok, was used to generate sexualised images of women and children online, which the prime minister, Anthony Albanese, described as "abhorrent". In the letter, obtained by Guardian Australia under freedom of information laws, eSafety's general manager of regulatory operations, Heidi Snell, pointed to Musk's promise on taking over the platform in 2022 that "removing child exploitation is priority #1", but said "the availability of CSEM [child sexual exploitation material] continues to appear particularly systemic on X".


Child abuse increasing and more complex to police, crime agency says

BBC News

Child sex abuse is becoming increasingly complex to police and officers are arresting an average of 1,000 potential offenders each month, the National Crime Agency (NCA) says. It says an increasing reliance on online platforms and advances in technology, such as AI image creation, are exacerbating the problem, with algorithms and digital communities connecting offenders to share and promote child sex abuse material. According to the NCA, the number of arrests has roughly doubled in the past three years. Statistically, potential offenders are in every community and victims in every school, the NCA said. It added that police cannot address the issue alone and called on technology companies to do more.


Condemnation of Elon Musk's AI chatbot reached 'tipping point' after French raid, Australia's eSafety chief says

The Guardian

Australia's eSafety commissioner has welcomed the global regulatory focus on Elon Musk's X after this week's raid in France. The eSafety commissioner, Julie Inman Grant, says global regulatory scrutiny of X has reached a "tipping point" after a raid on the company's offices in France this week. The raid on Tuesday was part of an investigation that included alleged offences of complicity in the possession and organised distribution of child abuse images, violation of image rights through sexualised deepfakes, and denial of crimes against humanity. A number of other countries - including the UK and Australia - and the EU have launched investigations into X in the past few weeks after its AI chatbot, Grok, was used to mass-produce sexualised images of women and children in response to user requests.


Paedophiles used AI to generate 3,440 child abuse videos in 2025, shocking report reveals - as experts call for immediate action to ban the 'frightening' technology

Daily Mail - Science & tech

Paedophiles used artificial intelligence (AI) to generate more than 3,000 child abuse videos in 2025, a shocking report has revealed. Analysis conducted by the Internet Watch Foundation (IWF) found that last year was the worst on record for AI-generated child sexual abuse material. The charity found a 'frightening' 26,362 per cent increase in photo-realistic AI videos of child sexual abuse.


Tech companies and UK child safety agencies to test AI tools' ability to create abuse images

The Guardian

Kanishka Narayan, the minister for AI and online safety, said the measure was 'ultimately stopping abuse before it happens'. Tech companies and child protection agencies will be given the power to test whether artificial intelligence tools can produce child abuse images under a new UK law. The announcement was made as a safety watchdog revealed that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, from 199 in 2024 to 426 in 2025. Under the change, the government will give designated AI companies and child safety organisations permission to examine AI models - the underlying technology for chatbots such as ChatGPT and image generators such as Google's Veo 3 - and ensure they have safeguards to prevent them from creating images of child sexual abuse.


AI Generated Child Sexual Abuse Material -- What's the Harm?

Ciardha, Caoilte Ó, Buckley, John, Portnoff, Rebecca S.

arXiv.org Artificial Intelligence

The development of generative artificial intelligence (AI) tools capable of producing wholly or partially synthetic child sexual abuse material (AI CSAM) presents profound challenges for child protection, law enforcement, and societal responses to child exploitation. While some argue that the harmfulness of AI CSAM differs fundamentally from other CSAM due to a perceived absence of direct victimization, this perspective fails to account for the range of risks associated with its production and consumption. AI has been implicated in the creation of synthetic CSAM of children who have not previously been abused, the revictimization of known survivors of abuse, the facilitation of grooming, coercion and sexual extortion, and the normalization of child sexual exploitation. Additionally, AI CSAM may serve as a new or enhanced pathway into offending by lowering barriers to engagement, desensitizing users to progressively extreme content, and undermining protective factors for individuals with a sexual interest in children. This paper provides a primer on some key technologies, critically examines the harms associated with AI CSAM, and cautions against claims that it may function as a harm reduction tool, emphasizing how some appeals to harmlessness obscure its real risks and may contribute to inertia in ecosystem responses.


Chatbot site depicting child sexual abuse images raises fears over misuse of AI

The Guardian

The IWF said it had been alerted to a chatbot site that offered scenarios including 'child prostitute in a hotel' and 'child and teacher alone after class'. A chatbot site offering explicit scenarios with preteen characters, illustrated by illegal abuse images, has raised fresh fears about the misuse of artificial intelligence. A report by a child safety watchdog has triggered calls for the UK government to impose safety guidelines on AI companies, amid a surge in child sexual abuse material (CSAM) created by the technology. The Internet Watch Foundation said it had been alerted to a chatbot site that offered a number of scenarios including "child prostitute in a hotel", "sex with your child while your wife is on holiday" and "child and teacher alone after class".


The most important tech stories of 2024, and also my favorite ones

The Guardian

Last week, we looked back at how 2024 made Elon Musk the world's most powerful man. Today, we're looking at a few other important themes that will influence the online and offline worlds in 2025. Google: Ruled an illegal monopoly in August, Google could be broken up. The results are anybody's guess, but what seemed impossible for a company worth $2.5tn is at play. The US has asked the judge in the case for a wholesale breakup of the giant, which would force it to divest Chrome, the world's most popular browser and one of Google's core businesses.


UK watchdog accuses Apple of failing to report sexual images of children

The Guardian

Apple is failing to effectively monitor its platforms or scan for images and videos of the sexual abuse of children, child safety experts allege, which is raising concerns about how the company can handle growth in the volume of such material associated with artificial intelligence. The UK's National Society for the Prevention of Cruelty to Children (NSPCC) accuses Apple of vastly undercounting how often child sexual abuse material (CSAM) appears in its products. In a year, child predators used Apple's iCloud, iMessage and Facetime to store and exchange CSAM in a higher number of cases in England and Wales alone than the company reported across all other countries combined, according to police data obtained by the NSPCC. Through data gathered via freedom of information requests and shared exclusively with the Guardian, the children's charity found Apple was implicated in 337 recorded offenses of child abuse images between April 2022 and March 2023 in England and Wales. In 2023, Apple made just 267 reports of suspected CSAM on its platforms worldwide to the National Center for Missing & Exploited Children (NCMEC), which is in stark contrast to its big tech peers, with Google reporting more than 1.47m and Meta reporting more than 30.6m, per NCMEC's annual report.


'Orwellian': EU's push to mass scan private messages on WhatsApp, Signal

Al Jazeera

The European Union is considering controversial proposals to mass scan private communications on encrypted messaging apps for child sex abuse material. Under the proposed legislation, photos, videos, and URLs sent on popular apps such as WhatsApp and Signal would be scanned by an artificial intelligence-powered algorithm against a government database of known abuse material. The Council of the EU, one of the bloc's two legislative bodies, is due to vote on the legislation, popularly known as Chat Control 2.0, on Thursday. If passed by the council, which represents the governments of the bloc's 27 member states, the proposals will move forward to the next legislative phase and negotiations on the exact terms of the law. While EU officials have argued that Chat Control 2.0 will help prevent child sex exploitation, encrypted messaging platforms and privacy advocates have fiercely opposed the proposals, likening them to the mass surveillance of George Orwell's 1984.